Careers


Bigdata/Spark Engineer – McLean, VA

Job Description

  • Understand the client's platform and perform platform design, development, support, and maintenance involving various Big Data services such as Hadoop, HDFS, MapReduce, YARN, Hive, and HBase.
  • Provide support and leadership from project definition to project closure, including interpreting business requirements and drafting them into technical specifications.
  • Design and develop frameworks for data ingestion, transformation, and reporting services.
  • Design and maintain Oozie workflows to automate the platform's production jobs.
  • Design and develop data analytics products using big data technologies such as Hadoop, Spark, Kafka, and Hive.
  • Code medium and complex applications for deployment into the platform's Hadoop and Spark clusters.
  • Ingest data from multiple sources, and maintain and scale the big data platform infrastructure that supports the client's business use cases.
  • Implement solutions in object-oriented and functional languages such as Java, Scala, and Python, applying sound design patterns and data structures.
  • Identify and recommend new ways to streamline data-centric applications and optimize their performance.
  • Participate in migrating the clusters to the AWS cloud.
  • Load the data flow for the data assets subscription model using Hortonworks NiFi.
  • Deliver and deploy applications in the cloud environment.
  • Enable data consumption patterns by integrating the EDP with BI tools such as Tableau.
  • Automate and schedule jobs using Oozie and Unix shell scripts.
  • Design and implement real-time/near-real-time data streaming patterns using Kafka and Spark Streaming (a brief sketch follows this list).
  • Integrate the Customer Master (C360) with the platform.
  • Troubleshoot and resolve software issues that arise during development, and document test cases to ensure quality.
  • Support Big Data services of the EIM platform that underpin its architecture and engineering frameworks, such as batch ingestion and processing patterns using Hive, Pig, MapReduce, and Spark.
  • Document the functional and technical requirements by following client-defined processes and methodologies.
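
To give candidates a feel for the streaming work referenced above, here is a minimal sketch of a Kafka-to-Spark Structured Streaming job in Scala. The broker address ("broker:9092"), topic name ("events"), and window sizes are placeholder assumptions for illustration only, not details of the client's platform; a production job would typically write to HDFS, Hive, or HBase rather than the console.

    // Minimal near-real-time pipeline: consume a Kafka topic with Spark
    // Structured Streaming and count events in one-minute windows.
    // Requires the spark-sql-kafka-0-10 connector on the classpath.
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object KafkaStreamSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("kafka-stream-sketch")
          .getOrCreate()
        import spark.implicits._

        // Read a stream of events from a Kafka topic (placeholder names).
        val raw = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load()

        // Kafka exposes key/value as binary; cast the payload to a string.
        val events = raw.selectExpr("CAST(value AS STRING) AS payload", "timestamp")

        // Near-real-time aggregation: event counts per one-minute window,
        // tolerating up to five minutes of late-arriving data.
        val counts = events
          .withWatermark("timestamp", "5 minutes")
          .groupBy(window($"timestamp", "1 minute"))
          .count()

        // Emit updated counts to the console for demonstration purposes.
        val query = counts.writeStream
          .outputMode("update")
          .format("console")
          .start()

        query.awaitTermination()
      }
    }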

Required Skills:

  • A minimum of a bachelor's degree in Computer Science or an equivalent field.
  • Cloudera Hadoop (CDH), Cloudera Manager, Informatica Big Data Edition (BDM), HDFS, YARN, MapReduce, Hive, Impala, Kudu, Sqoop, Spark, Kafka, HBase, Teradata Studio Express, Teradata, Tableau, Kerberos, Active Directory, Sentry, TLS/SSL, Linux/RHEL, Unix, Windows, SBT, Maven, Jenkins, Oracle, MS SQL Server, Shell Scripting, Eclipse IDE, Git, SVN
  • Must have strong problem-solving and analytical skills.
  • Must have the ability to identify complex problems and review related information to develop and evaluate options and implement solutions.

If you are interested in working in a fast-paced, challenging, fun, entrepreneurial environment and would like the opportunity to be part of this fascinating industry, send your resume to HSTechnologies LLC, 2801 W Parker Road, Suite #5, Plano, TX 75023, or email it to hr@sbhstech.com.